</span><span class="style5">To run the program:</span><span class="style4">
1) The program requires HyperCard Player, or HyperCard 1.2.5 or above;
2) Check that you have the following 22 files: P1-P10, STARTUP, documentation, neuralnetworks, simulator, numbers, num1, graph1, graph2, tut1.1, simul.main, TreffNet.ReadMe, TreffNet.Documentation;
3) Load all files into a </span><span class="style5">single</span><span class="style4"> </span><span class="style5">folder</span><span class="style4"> called TreffNet on the hard disk of your Macintosh;
4) Double-click on the stack called </span><span class="style5">STARTUP</span><span class="style4">.
TreffNet is a simple, HyperCard-based introduction to learning in neural networks. It consists of a tutorial on the simplest of network learning methods, Hebbian learning. The stacks comprise (1) tutorial animations, and (2) a graphical network simulator that can learn to associate two pattern vectors. Based on the first chapter of the classic text by Rumelhart and McClelland (1986), it "brings to life" the basic concepts and mathematics that need to be fully understood before the student tackles more complex systems.
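The Hebbian rule at the heart of the tutorial can be sketched in a few lines of Python. This is a hedged illustration of the textbook rule from Rumelhart and McClelland (1986), not the stack's own HyperTalk code; the function names and the bipolar (+1/-1) coding are assumptions made for the example:

```python
# Sketch of simple Hebbian learning for a pattern associator,
# following the rule delta_w[i][j] = r * target[i] * input[j].
# Bipolar (+1/-1) activations assumed; names are illustrative only.

def hebb_learn(input_pat, target_pat, r=0.25):
    """One Hebbian pass: weight[i][j] = r * target[i] * input[j]."""
    return [[r * t * a for a in input_pat] for t in target_pat]

def recall(weights, input_pat):
    """Propagate an input through the weights and threshold at zero."""
    nets = [sum(w * a for w, a in zip(row, input_pat)) for row in weights]
    return [1 if n > 0 else -1 for n in nets]

w = hebb_learn([1, -1, -1, 1], [-1, 1, -1, 1])
print(recall(w, [1, -1, -1, 1]))  # -> [-1, 1, -1, 1], the stored target
```

Presenting the learned input reproduces the stored target, which is the behavior the tutorial animations walk through step by step.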
TreffNet was designed to be distributed as </span><span class="style5">Shareware.</span><span class="style4"> It is the hope of the author that if you like the package and find it useful in your own learning and in teaching others about neural nets, you will send a small Shareware fee of $15 to the author. Alternatively, you can send me a CD of music from your part of the world! Receipt of your donation will result in my sending you a non-Shareware version of the program. TreffNet</span><span class="style5"> </span><span class="style4">cannot be sold or distributed for profit</span><span class="style5"> </span><span class="style4">without the author's written consent. However, distribution via online services is acceptable provided that all original components and unaltered files are included in the package.
TreffNet is supplied as is. All responsibility lies with the user, not the author. The person using TreffNet bears all risk as to its quality and performance. The author accepts no responsibility, explicit or implied, for the program's consequences.
This program explains some of the major concepts needed to understand learning and representation in models of nervous systems. In recent years, the field of psychology has experienced a renaissance of interest in the biological and physical basis of memory, learning, perception and action. This has partly been due to the power which computer modeling has provided researchers in simulating our own and other animals' behavior (e.g., Anderson & Hinton, 1981).
The last decade has also seen the development of elegant and powerful artificial neural-network mechanisms which simulate some of the details of real nervous systems (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986). The recent excitement about neural networks is due to the hypothesis that mental states (e.g. memories, dreams and perceptions) are a kind of "emergent property" of the many simple processes at the microscopic scale of continuous neural activity. Neural networks show that "knowledge" may not exist in the brain as simple "pictures", words or symbols (as the approach of symbolic AI assumed in the 1970s), but instead may be coded as a diffuse pattern of activity distributed across large ensembles of neurons. Neural network models allow the simulation of certain facts from neuropsychology and indicate that "thoughts" and "percepts" are not constructed and represented in any single, local part of the brain, but are instead distributed and superimposed over large areas of nervous tissue in a pattern of electrical and chemical activity. In particular, because different patterns are all stored in the </span><span class="style17">same</span><span class="style3"> nervous tissue, very interesting effects can naturally occur, such as the mixing and blending of one pattern with another to create new patterns and generalizations.
In a connectionist model, knowledge is embodied or stored as a </span><span class="style17">simultaneous pattern</span><span class="style3"> of neural strengths in the multiple </span><span class="style17">connections</span><span class="style3"> between neurons. This approach to cognitive science has therefore become known as "Parallel Distributed Processing" (PDP) (McClelland & Rumelhart, 1986), but is just as often called connectionist or network modeling, for obvious reasons.
Although such a constructivist approach toward explaining mental states may in fact be fundamentally flawed, due to its emphasis on the organism alone with no serious investigation of the impact of information from the environment (Treffner, 1987), neural networks do provide a powerful mechanism with which to examine the material reductionism in the representationist hypothesis that </span><span class="style17">mental states may be brain states</span><span class="style3">.
</span><span class="style6"> </span><span class="style16">Implications of connectionism</span><span class="style3">
Connectionist modeling has been used to help understand the mechanisms on which human behavior is founded. It has also been used to support the suggestion that animals and humans may be more similar psychologically than previously realized. We can learn about ourselves by studying the neural processes of other organisms and then showing that both depend upon similar principles that can be modeled. The philosopher Descartes, and those who followed him, believed that non-human species were simple, unconscious machines but that we were different since we possessed "consciousness". The new models of neural networks show how the complex behavior of both humans and animals might be conceived in the same physical terms: as the result of a myriad of interactions occurring in their nervous systems. Old issues such as the existence of animal consciousness can then be given a fresh interpretation since we can ask questions guided by the similarities between neural facts, predictions from network models, and observed behavior.
Although highly successful, the neural network approach is not without its critics. For example, the central problem of how to discover or define appropriate features on the input units (the problem of </span><span class="style17">information</span><span class="style3">!) has never been adequately addressed by network representationist approaches (Treffner, 1987). One answer to this fundamental issue comes from a realization of the deep evolutionary and ecological constraints that must apply to any purported mechanism underlying the dynamics of perception and action. From this perspective, network architectures would need to conform to the level of dynamical and ecological constraint, a level usually ignored in network modeling (Gibson, 1979; Kelso, 1995). It is perhaps with respect to this informational and ecological level of analysis that network models might still provide useful insights into the mechanisms supporting the behavior of complex systems (Treffner, 1987, 1997).
It is hoped that this tutorial will provide the student with a concrete grasp of the basic hypothesis of connectionist representation: how many very simple and meaningless processes at a microscopic neural scale might provide a substrate on which macroscopic ensemble effects emerge at the scale of everyday experience.
</span><span class="style6">Anderson, J. A., & Hinton, G. E. (1981/1989).</span><span class="style3"> </span><span class="style17">Parallel models of associative memory.</span><span class="style3"> Lawrence Erlbaum Associates. (The volume which set the ball rolling for the PDP revival of the 1980s.)
</span><span class="style6">Caudill, M., & Butler, C. (1990).</span><span class="style3"> </span><span class="style17">Naturally intelligent systems.</span><span class="style3"> MIT Press. (A readable introduction to the main ideas of connectionism. No equations, no references, just English!)</span><span class="style6">
Gibson, J. J. (1979/1986).</span><span class="style3"> </span><span class="style17">The ecological approach to visual perception.</span><span class="style3"> LEA. (The classic text on perception by a scientific visionary.)
</span><span class="style6">Gonzales, M. E. Q., Treffner, P. J., & French, T. (1988).</span><span class="style3"> A naturalistic approach to mental representation. Presented at the Int. Conf. on Thinking, Aberdeen. Reprinted in Gilhooly, K. J., Keane, M. T. G., Logie, R. H., & Erdos, G. (1990), </span><span class="style17">Lines of thinking: Reflections on the psychology of thought</span><span class="style3">, Wiley. (Reconsiders the debate between AI and PDP from the standpoint of ecological psychology.)
</span><span class="style6">Kelso, J. A. S. (1995).</span><span class="style3"> </span><span class="style17">Dynamic patterns: The self-organization of brain and behavior.</span><span class="style3"> MIT Press. (The classic text on dynamical systems in psychology.)
</span><span class="style6">McClelland, J. L., & Rumelhart, D. E. (1986).</span><span class="style3"> </span><span class="style17">Parallel distributed processing, Vol. 2: Psychological and biological models.</span><span class="style3"> MIT Press.
&
</span><span class="style6">Rumelhart, D. E., & McClelland, J. L. (1986).</span><span class="style3"> </span><span class="style17">Parallel distributed processing, Vol. 1: Foundations. </span><span class="style3">MIT Press. (These two researchers originated the term "PDP" and their volumes have become the essential references in the area. The necessary equations are explained in sufficiently transparent detail.)
</span><span class="style6">Rumelhart, D. E., & McClelland, J. L. (1988).</span><span class="style3"> </span><span class="style17">Explorations in parallel distributed processing.</span><span class="style3"> (A companion volume to PDP 1 & 2. Tutorial format which includes programs for simulating the many different network models available. Additional explanation of the equations. Inexpensive.)
</span><span class="style6">Treffner, P. J. (1987).</span><span class="style3"> Ecological connectionism and animal-environment mutuality. In M. Caudill (Ed.) </span><span class="style17">Proc. IEEE 1st Int. Conf. on Neural Networks,</span><span class="style3"> </span><span class="style17">Vol. 2</span><span class="style3">, pp. 813-820, San Diego.
</span><span class="style6">Treffner, P. J. (1997).</span><span class="style3"> Representation and specification in the dynamics of cognition: Review of Robert F. Port and Timothy van Gelder (Eds.), </span><span class="style17">Mind as motion: Explorations in the dynamics of cognition.</span><span class="style3"> </span><span class="style17">Contemporary Psychology</span><span class="style3">, 42(8).
</span></text>
</content>
<content>
<layer>card</layer>
<id>2</id>
<text><======
Make sure scroll bar is at top before reading!
</text>
</content>
<content>
<layer>card</layer>
<id>3</id>
<text>Background</text>
</content>
<name></name>
<script></script>
</card>
card_5355.xml
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE card PUBLIC "-//Apple, Inc.//DTD card V 2.0//EN" "" >
The program is divided into </span><span class="style6">two</span><span class="style3"> </span><span class="style6">parts</span><span class="style3">. The first part consists of Menu Options 2 to 9 and provides a </span><span class="style6">Tutorial</span><span class="style3"> on neural network models of associative memory. Starting with an outline of what a single neuron is and does, the user can see how to build a network which implements a simple pattern associator using a form of synaptic weight adjustment called "Hebbian learning". It is strongly recommended that the user work through all these tutorials in sequential order. Just click on the topic in the Menu screen.
The user can view screens as fast as he or she wishes; the tutorials are, for the most part, self-paced. This means the user can click on a </span><span class="style6">NEXT</span><span class="style3"> button at the bottom of the screen to view the next screen in a given tutorial. You can return to the menu at any stage in a tutorial by clicking the </span><span class="style6">MENU</span><span class="style3"> button in the bottom corner of the screen.
</span><span class="style18">TO STOP PROGRAM EXECUTION</span><span class="style3">: If at any stage the user wishes to stop program execution, then the "Apple/command" key should be held down while simultaneously pressing the "." (period) key.
Menu Option 10 takes the user to the other major part of the program, the neural network </span><span class="style6">Simulator</span><span class="style3"> itself. The simulator allows the user to watch the network learn a Hebbian association between two patterns of neural activity, and subsequently test the new pattern of connection weights by presenting incomplete or partial patterns. This demonstrates the process of so-called "pattern completion", which some researchers think occurs in the brain, for example, when seeing parts of objects that are occluded by other objects. In this case we know (or perceive) that a whole object is really there rather than only the visible part (e.g. we know or perceive that a person occluded by a table isn't really half a person!). The simulator network is based on the simple introductory network presented on pp. 34-35 of Rumelhart and McClelland (1986). Readers might find it useful to refer back to that chapter while using the simulator.
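The pattern-completion behavior described above can be illustrated with a short sketch. The coding of missing units as 0 and the use of the plain Hebb rule are assumptions made for the example; the stack's internal HyperTalk implementation may differ:

```python
# Sketch of pattern completion: learn one association with the simple
# Hebb rule, then present a partial input (unknown units set to 0).
# The 0-for-missing coding is an assumption for illustration.

def hebb_weights(inp, target, r=0.25):
    return [[r * t * a for a in inp] for t in target]

def complete(weights, partial):
    nets = [sum(w * a for w, a in zip(row, partial)) for row in weights]
    return [1 if n >= 0 else -1 for n in nets]

w = hebb_weights([1, -1, 1, -1], [1, 1, -1, -1])
# Only the first half of the input is known, yet the whole
# target pattern is still recovered:
print(complete(w, [1, -1, 0, 0]))  # -> [1, 1, -1, -1]
```

The half-pattern still drives every output unit to the correct sign, which is the "half a person behind a table" effect described in the text.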
A second aspect of the simulator demonstrates how five digits (e.g., 1, 2, 3, 4, 5) may each be represented as a pattern of activation and then learned and stored in the same network. This can be considered a "real world" example of pattern recognition and pattern completion. In addition, the numbers simulation also allows the user to build NEW associative networks of ANY NUMBER of input and output units, thus permitting the learning of any association required.
</span></text>
</content>
<content>
<layer>card</layer>
<id>2</id>
<text><======
Make sure scroll bar is at top before reading!
</text>
</content>
<content>
<layer>card</layer>
<id>3</id>
<text>The Programs</text>
</content>
<name></name>
<script></script>
</card>
card_5591.xml
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE card PUBLIC "-//Apple, Inc.//DTD card V 2.0//EN" "" >
The simulator allows the user to experiment with the Hebbian learning process by entering pairs of patterns to be associated together and then watching how the connection strengths change. The best way to become acquainted with the simulator is to click </span><span class="style6">SIMULATOR INTRODUCTION</span><span class="style3"> in the </span><span class="style6">SIMULATOR MENU</span><span class="style3">. This eventually takes you to the simulator itself. You can then follow the instructions given on how to enter the input and target patterns to be learned.
Be sure to read below what the function of the various buttons, fields, and screens are before clicking on </span><span class="style6">SIMULATOR INTRODUCTION</span><span class="style3">.
</span><span class="style16">GRAPHICS vs. MATRIX DISPLAYS</span><span class="style6">:</span><span class="style16">
</span><span class="style3">
You have the option of either of two display formats:
</span><span class="style6">1) GRAPHICS: </span><span class="style3">a "classical" network-type display with the input units below fully connected to the output units above them, or
</span><span class="style6">2) MATRIX:</span><span class="style3"> a "matrix" display which allows a more transparent picture of the pattern of individual weights in terms of positive and negative values (i.e. the "weight matrix").
The advantage of the </span><span class="style6">GRAPHICS</span><span class="style3"> mode is that it uses a display format analogous to that previously used in the introductory tutorials.
The advantage of the </span><span class="style6">MATRIX</span><span class="style3"> display is that in addition to the clear display of weight patterns, this format is also used in the </span><span class="style6">NUMBER LEARNING NETWORK</span><span class="style3"> example, which requires 15 input and 15 output/target units. This means a total of 225 weights can change on each cycle, far too many to display using the </span><span class="style6">GRAPHICS</span><span class="style3"> format.
</span><span class="style16">SETUP</span><span class="style6">:</span><span class="style3"> The </span><span class="style6">SETUP</span><span class="style3"> button prompts the user for
(1) the total number of input-target pairs to be associated, e.g. 2 means: 2 input-target pairs (which is 4 patterns in total);
(2) the learning rate parameter, "r", e.g. 0.25;
(3) the actual input and target patterns with each individual value ("1" or "-1") separated by commas (","), e.g. 1,-1,-1,1
(4) whether the user requires extra information to be displayed during learning. Clicking on "</span><span class="style6">Yes</span><span class="style3">" will slow the learning procedure down, but will also display extra messages explaining individual stages in the learning process.
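As a sketch of what the SETUP entry format implies, a comma-separated pattern such as "1,-1,-1,1" can be parsed and validated as follows. This is illustrative Python, not the stack's HyperTalk, and the function name is an invention for the example:

```python
# Sketch: parse a pattern entered as comma-separated values, as in the
# SETUP prompt above, checking that each value is 1 or -1.

def parse_pattern(text):
    values = [int(v) for v in text.split(",")]
    if any(v not in (1, -1) for v in values):
        raise ValueError("pattern values must be 1 or -1")
    return values

print(parse_pattern("1,-1,-1,1"))  # -> [1, -1, -1, 1]
```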
</span><span class="style16">Direct entry of patterns and parameters</span><span class="style6">:</span><span class="style3"> All the above values (except extra information) may be entered without clicking the </span><span class="style6">SETUP</span><span class="style3"> button. Simply place the cursor in the appropriate field, select (highlight) any previous values to delete them, and enter the new values in their place. Be sure that parameters such as the number of pairs and the learning rate have been entered.
</span><span class="style16">LEARN</span><span class="style6">:</span><span class="style3"> The </span><span class="style6">LEARN</span><span class="style3"> button should be pressed after all appropriate setup values have been entered either by using the </span><span class="style6">SETUP</span><span class="style3"> button, or by manually entering the values in the appropriate fields. There is a </span><span class="style6">LEARN</span><span class="style3"> button for both </span><span class="style6">Matrix</span><span class="style3"> and </span><span class="style6">Graphics</span><span class="style3"> display options.
</span><span class="style16">TEST</span><span class="style6">:</span><span class="style3"> The </span><span class="style6">TEST</span><span class="style3"> button may be clicked on completion of learning after a test pattern has been entered in the "Enter test input" field. There is a </span><span class="style6">TEST</span><span class="style3"> button for both the </span><span class="style6">Matrix</span><span class="style3"> and </span><span class="style6">Graphics</span><span class="style3"> formats.
</span><span class="style16">GO TO</span><span class="style6">:</span><span class="style3"> Below the label "Go To" are two buttons to take the user either to the </span><span class="style6">Graphics</span><span class="style3"> or </span><span class="style6">Matrix</span><span class="style3"> displays.
</span><span class="style16">MENU</span><span class="style6">:</span><span class="style3"> The </span><span class="style6">MENU</span><span class="style3"> button takes the user back to the main menu.
This field displays the total number of pattern pairs to be learned and should be entered by the user in the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">screen</span><span class="style3">.
This field displays the learning rate, "r", which should have been entered by the user in the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">screen</span><span class="style3">.
</span><span class="style16">INPUTS and TARGETS</span><span class="style6">:</span><span class="style3">
These two fields display input and target patterns. The patterns are entered by the user in the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">screen</span><span class="style3">.
The </span><span class="style6">Graphics</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> displays a fully interconnected neural network with one layer of connection weights. Input units are below with their activation values shown within each unit. Output units are above the inputs with activations shown within each unit. The target units are above the output units. Weights are shown on each connection from input to output unit.
</span><span class="style16">LEARN</span><span class="style6">:</span><span class="style3"> The patterns previously entered in the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> may be learned by pressing the </span><span class="style6">LEARN</span><span class="style3"> button in the Graphics screen.
</span><span class="style16">TEST</span><span class="style6">:</span><span class="style3"> Pressing the </span><span class="style6">TEST</span><span class="style3"> button takes the user to the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> where a test input can be entered. Appropriate fields and buttons are flashed to indicate the user's next appropriate action.
</span><span class="style16">SHOW INPUTS</span><span class="style6">:</span><span class="style3"> At the completion of learning, the </span><span class="style6">SHOW INPUTS</span><span class="style3"> button can display the input-target pairs for comparison to the actual output produced by the network. This is especially useful after testing a partial pattern as test input to check whether the output produced was equal to the output expected from previous learning. Clicking the </span><span class="style6">SHOW</span><span class="style3"> </span><span class="style6">INPUTS</span><span class="style3"> button once displays pattern pairs. Clicking it again hides the patterns.
</span><span class="style16">PATTERN NUMBER</span><span class="style6">:</span><span class="style3"> The number of the pattern pair currently being associated by the network is shown in the field called </span><span class="style6">PATTERN No.</span><span class="style3"> As each pattern pair is learned, the next input pattern is placed on the input units and the value in </span><span class="style6">PATTERN No.</span><span class="style3"> increases by one.
</span><span class="style16">CYCLE NUMBER</span><span class="style6">:</span><span class="style3"> The value in </span><span class="style6">CYCLE No.</span><span class="style3"> increases each time the network completes a cycle. A cycle is one sweep of the input pattern through the weights followed by a comparison of the newly computed outputs to the target pattern. Several cycles may be required before the output pattern equals the target pattern for a given input-target pair.
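The learn/cycle loop just described can be sketched as follows. The error-correcting update shown (adjusting only the weights of incorrect output units) is one plausible reading of the text's compare-and-repeat description; the stack's exact update rule may differ, so treat this as an assumption-laden illustration:

```python
# Sketch of the learn/cycle loop: each cycle sweeps the input through
# the weights and compares the output to the target. Weights of wrong
# output units get a Hebbian-style correction of size r.

def cycle(weights, inp, target, r=0.25):
    out = [1 if sum(w * a for w, a in zip(row, inp)) > 0 else -1
           for row in weights]
    for i, (o, t) in enumerate(zip(out, target)):
        if o != t:  # update only the weights of incorrect output units
            for j, a in enumerate(inp):
                weights[i][j] += r * t * a
    return out

inp, target = [1, -1, 1, -1], [1, 1, -1, -1]
w = [[0.0] * 4 for _ in range(4)]
cycles = 0
while cycle(w, inp, target) != target:
    cycles += 1
print(cycles)  # number of cycles needed before output equals target
```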
The buttons on the </span><span class="style6">Matrix</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> have exactly the same function as on the </span><span class="style6">Graphics</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> (see above). Note, however, that they are positioned differently on the screen. The </span><span class="style6">Matrix</span><span class="style3"> display provides a clear representation of the weights and is preferable to the </span><span class="style6">Graphics</span><span class="style3"> format for seeing how they are patterned in terms of positive or negative values. This is especially the case when the user explores the "Numbers Simulation" demo, which has far too many weights to display in the </span><span class="style6">Graphics</span><span class="style3"> format.
</span><span class="style16">SAVING WEIGHTS</span><span class="style6">: </span><span class="style3">The </span><span class="style6">Matrix</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> also provides a facility for saving a set of weights once learned, and loading a set saved previously. This allows the user to quickly run the network with a set of weights which may have taken considerable time (!!) to learn initially. Click either the button </span><span class="style6">SAVE WTS </span><span class="style3">or </span><span class="style6">LOAD WTS</span><span class="style3"> for this function. Note that these buttons are not replicated in the </span><span class="style6">Graphics</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3">, so the user should go to the </span><span class="style6">Matrix</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> in order to save or load weights.
</span></text>
</content>
<content>
<layer>card</layer>
<id>2</id>
<text><======
Make sure scroll bar is at top before reading!
</text>
</content>
<content>
<layer>card</layer>
<id>3</id>
<text>The Network Simulator</text>
</content>
<name></name>
<script></script>
</card>
card_5735.xml
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE card PUBLIC "-//Apple, Inc.//DTD card V 2.0//EN" "" >
The numbers network allows the user to explore the ideas of PATTERN INTERFERENCE and PATTERN GENERALIZATION with a "real world" pattern recognition example. Interference occurs when pattern pairs that are associated together are so similar to each other that the network cannot find an appropriate set of weights to store them distinctly. There is then "crosstalk" between the patterns, and errors may occur in recalling (completing) a pattern.
Pattern </span><span class="style17">generalization</span><span class="style3"> also occurs when </span><span class="style17">different</span><span class="style3"> inputs are associated with the </span><span class="style17">same</span><span class="style3"> target (e.g. different "chair" visual patterns are each associated with the same pattern for the label "furniture"). If the network is then tested with a new pattern (similar to but </span><span class="style17">different</span><span class="style3"> from the initial "chair" patterns), the resulting output may actually equal the "furniture" pattern. Thus the pattern of weights in the network can be said to represent the "concept" of "furniture" that many </span><span class="style17">different</span><span class="style3"> chair input patterns all produce.
We shall explore this idea of generalization by storing numbers (e.g. 1, 2, 3, 4, 5) in the same network. As an example, it is recommended that you store the first five digits (i.e. 1 to 5). Each number is entered into the network as a pattern of "1"s on a 3x5 rectangular grid. This is similar to the pixels of a computer screen. Only the inputs need be entered because the network will associate the input pattern with a copy of itself on the target units. Such a network is called an AUTOASSOCIATOR because it stores a pattern by associating the pattern with </span><span class="style17">itself</span><span class="style3"> (i.e. the input and the target are the same pattern).
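The autoassociator idea can be sketched as follows: the same vector is used as both input and target in the Hebb update. Bipolar coding, the learning rate, and the plain Hebb rule are assumptions for illustration, not the stack's actual code:

```python
# Sketch of an autoassociator: each pattern serves as both input and
# target, so the Hebb update uses the same vector twice.

def auto_learn(patterns, n, r=0.1):
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                w[i][j] += r * p[i] * p[j]  # target == input
    return w

p = [1, -1, 1, -1, 1, -1]
w = auto_learn([p], 6)
partial = [1, -1, 1, 0, 0, 0]  # second half of the pattern unknown
nets = [sum(w[i][j] * partial[j] for j in range(6)) for i in range(6)]
print([1 if x > 0 else -1 for x in nets])  # restores the full pattern
```

Because the stored pattern is correlated with itself, presenting only half of it still pushes every unit toward its stored value, which is exactly the pattern completion the numbers network demonstrates.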
________________________________________________
</span><span class="style16">OPERATION OF THE NUMBERS NETWORK</span><span class="style6">
</span><span class="style16">SETUP</span><span class="style6">: </span><span class="style3">As in the Simulator network, there is a </span><span class="style6">Setup</span><span class="style3"> and a </span><span class="style6">Matrix Screen</span><span class="style3">. The </span><span class="style6">Setup </span><span class="style3">Screen provides the basic control over initial parameters, etc. The </span><span class="style6">Matrix</span><span class="style3"> Screen provides a matrix display of the network learning process and weights. There is also a </span><span class="style6">Numbers</span><span class="style3"> Screen which allows the user to enter input patterns corresponding to digits. The </span><span class="style6">Numbers</span><span class="style3"> screen also displays, in digit form, the results of testing the network with partial input patterns.
The Numbers network's </span><span class="style6">Matrix</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> is analogous to that of the </span><span class="style6">Simulator</span><span class="style3"> network, and likewise provides the capability for saving the set of weights once learned. This is very important since it may take </span><span class="style17">15 minutes</span><span class="style3"> or more to learn the weights for a set of five digits! Once learned, the weights may be saved and loaded for future testing and demonstration of the </span><span class="style6">Numbers Network</span><span class="style3">.
</span><span class="style16">SETUP SCREEN
</span><span class="style3">
This has buttons analogous to those in the </span><span class="style6">Setup</span><span class="style3"> screen of the basic, 4-node Simulator network.
</span><span class="style16">SETUP</span><span class="style6">:</span><span class="style3"> This must be clicked </span><span class="style18">FIRST</span><span class="style3"> before any others. </span><span class="style6">SETUP</span><span class="style3"> allows the user to enter:
a) the total number of patterns to be learned, and
b) the pattern length.
For number patterns, the length </span><span class="style18">MUST</span><span class="style3"> equal 15 since there are 15 "pixels" in each number's representation.
</span><span class="style16">ENTER NUMBERS</span><span class="style6">:</span><span class="style3"> After the </span><span class="style6">SETUP</span><span class="style3"> procedure has been followed, the </span><span class="style6">ENTER NUMBERS</span><span class="style3"> button should be clicked. This takes the user to the </span><span class="style6">Numbers</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3">.
Each digit to be learned can be entered into the 3x5 grid by placing the cursor over a square and entering a "1". Where there is a space in the pattern representation of a number, do not enter anything in that square. Only enter a "1" where there would be "ink" in a normal representation of a digit.
For example: the representation of the digits "2" and "5" might look like this:
111     111
  1     1
111     111
1         1
111     111
Notice that there is a "1" absent wherever there is a blank space in the writing of the digit. You will be asked whether you want to see an example of the digit representation. As usual, you should initially answer with the "</span><span class="style6">Yes</span><span class="style3">" option to display some example digits.
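Flattening such a grid into the 15-value input vector can be sketched as follows. "1" marks ink; how the stack actually codes blank squares (0 or -1) is an assumption here, and -1 is used for the example:

```python
# Sketch of flattening the 3x5 digit grid into the 15-value pattern
# the numbers network requires. Blank squares are coded -1 here.

def grid_to_pattern(rows, blank=-1):
    """Flatten 5 rows of 3 characters into one 15-value list."""
    return [1 if ch == "1" else blank
            for row in rows for ch in row.ljust(3)]

two = ["111",
       "  1",
       "111",
       "1  ",
       "111"]
pat = grid_to_pattern(two)
print(len(pat))  # -> 15, matching the required pattern length
```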
</span><span class="style16">PUT PATTERN</span><span class="style6">:</span><span class="style3"> After you have created EACH complete digit pattern, click the button </span><span class="style6">PUT PATTERN</span><span class="style3">. This will store the inputs in the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> and allow you to then enter another digit into the grid. Again click </span><span class="style6">PUT PATTERN</span><span class="style3">, after each digit is created.
</span><span class="style16">LEARN</span><span class="style3">: When you have entered all the patterns to be learned by clicking </span><span class="style6">PUT PATTERN</span><span class="style3">, click the </span><span class="style6">LEARN</span><span class="style3"> button, which will take you to the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3">. Then click the flashing </span><span class="style6">LEARN</span><span class="style3"> button there. This will take you to the </span><span class="style6">Matrix</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3">, where the learning process will be displayed.
You can follow the learning process as it happens. However, for these 15-unit digit patterns, storing each digit through weight modification takes about two minutes, because of the large number of weights to be updated (15 inputs x 15 outputs = 225 weights in total!). If there are five or so patterns to be stored, the network will take about 10-15 minutes to learn them all.
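The weight modification counted above can be sketched as a generic Hebbian outer-product update; this is the textbook rule, not the stack's actual HyperTalk code, and the default learning rate here is an assumption.

```python
# Sketch of the Hebbian weight update for a 15-input, 15-output
# associator: every one of the 15 x 15 = 225 weights is nudged by
# learning_rate * target_j * input_i for each stored pattern.
# (Generic textbook rule; the stack's own HyperTalk code may differ,
# and the learning_rate default is an assumption.)

def hebbian_learn(patterns, targets, learning_rate=1.0):
    n_in = len(patterns[0])
    n_out = len(targets[0])
    # Weight matrix, all zeros before any learning takes place.
    w = [[0.0] * n_in for _ in range(n_out)]
    for x, t in zip(patterns, targets):
        for j in range(n_out):        # one pass per output unit
            for i in range(n_in):     # one pass per input unit
                w[j][i] += learning_rate * t[j] * x[i]
    return w
```

Storing a single 15-unit pattern therefore touches all 225 weights, which is why each digit takes so long on a vintage Macintosh.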
When the learning is complete, you will be asked whether you wish to </span><span class="style6">TEST</span><span class="style3"> the network with a new pattern, or one of the old ones. Answering "</span><span class="style6">Yes</span><span class="style3">" to the prompt will take you to the </span><span class="style6">Setup Screen</span><span class="style3"> where the </span><span class="style6">TEST INPUT</span><span class="style3"> field will flash. You can then either:
1) (This is </span><span class="style18">not</span><span class="style3"> recommended initially - see [2] below). Follow the prompt, enter any partial pattern in the </span><span class="style6">TEST</span><span class="style3"> field, and then click the </span><span class="style6">TEST</span><span class="style3"> button. This option is difficult to use at first, since it is hard to tell what a raw input string actually represents. Instead, do the following:
2) (</span><span class="style18">Recommended</span><span class="style3"> for testing digits). Click instead the </span><span class="style6">NUMBERS</span><span class="style3"> button which will take you to the </span><span class="style6">Numbers Screen</span><span class="style3">. Here you can enter the complete or partial pattern for a digit previously learned by entering the pattern in the numbers grid and then clicking the </span><span class="style6">TEST</span><span class="style3"> button. This will test the network's ability for correct recall given the weights produced during learning. In addition you will be able to observe the pattern completion and pattern blending properties of the network.
</span><span class="style16">TEST</span><span class="style6">:</span><span class="style3"> Click the </span><span class="style6">TEST</span><span class="style3"> button after entering each test input pattern. This will take you to the </span><span class="style6">Setup</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> where you should click on the </span><span class="style6">TEST</span><span class="style3"> button. This takes you to the </span><span class="style6">Matrix</span><span class="style3"> </span><span class="style6">Screen</span><span class="style3"> where you can observe the output of each test input. After the output has been produced, click the </span><span class="style6">SHOW NUMBER</span><span class="style3"> button which will display the output pattern in digit form.
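The recall computed in the TEST step can be sketched as a matrix-vector product through the learned weights, followed by a threshold to recover a 0/1 output pattern. The threshold rule here is an assumption; the stack may post-process activations differently.

```python
# Sketch of the TEST step: each output unit sums its weighted inputs,
# then a simple threshold turns the activation back into a 0/1 value.
# (The threshold value is an assumption, not taken from the stack.)

def recall(w, x, threshold=0.5):
    out = []
    for row in w:                     # one weight row per output unit
        activation = sum(wi * xi for wi, xi in zip(row, x))
        out.append(1 if activation > threshold else 0)
    return out
```

Feeding a partial digit pattern through `recall` is what produces the pattern-completion and pattern-blending effects mentioned above.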
</span></text>
</content>
<content>
<layer>card</layer>
<id>2</id>
<text><======
Make sure scroll bar is at top before reading!
</text>
</content>
<content>
<layer>card</layer>
<id>3</id>
<text>The Numbers Network </text>
</content>
<name></name>
<script></script>
</card>
card_6048.xml
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE card PUBLIC "-//Apple, Inc.//DTD card V 2.0//EN" "" >
<text><span class="style3"> </span><span class="style16">USING THE NUMBERS NETWORK AS
</span><span class="style6"> </span><span class="style16">A GENERAL PATTERN ASSOCIATOR
</span><span class="style18">
</span><span class="style3"> The numbers network can be used as a general associative network for associating </span><span class="style17">any</span><span class="style3"> input of arbitrary length with </span><span class="style17">any </span><span class="style3">target. Thus the user may experiment with networks of different numbers of units. To save time, input and target patterns may be entered </span><span class="style17">directly</span><span class="style3"> into their respective fields in the </span><span class="style6">Setup Screen </span><span class="style3">without pressing the </span><span class="style6">SETUP</span><span class="style3"> button, as may the relevant parameters of </span><span class="style6">Total Patterns</span><span class="style3">, </span><span class="style6">Pattern Length</span><span class="style3">, and </span><span class="style6">Learning Rate</span><span class="style3">. Alternatively, these parameters may be entered by clicking the </span><span class="style6">SETUP</span><span class="style3"> button first which will prompt the user for each value. </span></text>
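The general-associator mode above can be sketched as a Hebbian trainer whose input length, target length, and learning rate are all free parameters, mirroring the Pattern Length and Learning Rate fields of the Setup Screen. This is a generic illustration, not the stack's HyperTalk implementation.

```python
# Sketch: a Hebbian associator with freely chosen input and target
# lengths and an explicit learning rate, mirroring the Setup Screen
# parameters. (Generic illustration; not the stack's HyperTalk code.)

def train(inputs, targets, learning_rate):
    n_in, n_out = len(inputs[0]), len(targets[0])
    w = [[0.0] * n_in for _ in range(n_out)]
    for x, t in zip(inputs, targets):
        for j in range(n_out):
            for i in range(n_in):
                w[j][i] += learning_rate * t[j] * x[i]
    return w

# Associate a 4-unit input with a 2-unit target.
w = train([[1, 0, 1, 0]], [[0, 1]], learning_rate=0.5)
```

Note that, unlike the digits network, the input and target here have different lengths, so the weight matrix is rectangular rather than square.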